Robust prediction of citywide traffic flows at different time periods plays a crucial role in intelligent transportation systems. While previous work has made great efforts to model spatio-temporal correlations, existing methods still suffer from two key limitations: i) Most models collectively predict all regions' flows without accounting for spatial heterogeneity, i.e., different regions may have skewed traffic flow distributions. ii) These models fail to capture the temporal heterogeneity induced by time-varying traffic patterns, as they typically model temporal correlations with a shared parameterized space for all time periods. To tackle these challenges, we propose a novel Spatio-Temporal Self-Supervised Learning (ST-SSL) traffic prediction framework, which enhances traffic pattern representations to reflect both spatial and temporal heterogeneity via auxiliary self-supervised learning paradigms. Specifically, ST-SSL is built over an integrated module with temporal and spatial convolutions for encoding information across space and time. To achieve adaptive spatio-temporal self-supervised learning, ST-SSL first performs adaptive augmentation over the traffic flow graph data at both the attribute and structure levels. On top of the augmented traffic graph, two SSL auxiliary tasks are constructed to supplement the main traffic prediction task with spatial and temporal heterogeneity-aware augmentation. Experiments on four benchmark datasets demonstrate that ST-SSL consistently outperforms various state-of-the-art baselines. Since spatio-temporal heterogeneity widely exists in practical datasets, the proposed framework may also shed light on other spatio-temporal applications. Model implementation is available at https://github.com/Echo-Ji/ST-SSL.
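To make the attribute- and structure-level augmentation concrete, here is a minimal sketch that randomly masks node attributes and drops edges on a traffic flow graph. Note that ST-SSL's actual augmentation is adaptive and heterogeneity-aware, so the uniform random rates below are a simplification, and all array shapes are assumptions.

```python
import numpy as np

def augment_traffic_graph(x, adj, attr_mask_rate=0.1, edge_drop_rate=0.1, rng=None):
    """Attribute- and structure-level augmentation of a traffic flow graph.

    x:   (num_regions, num_steps) traffic flow matrix (node attributes)
    adj: (num_regions, num_regions) symmetric adjacency matrix (structure)

    A minimal random-masking sketch; ST-SSL itself uses an *adaptive*,
    heterogeneity-aware augmentation, which this simplification omits.
    """
    rng = rng or np.random.default_rng()
    # Attribute level: zero out a fraction of the flow entries.
    x_aug = x * (rng.random(x.shape) >= attr_mask_rate)
    # Structure level: drop a fraction of existing edges, kept symmetric.
    drop = np.triu(rng.random(adj.shape) < edge_drop_rate, k=1)
    drop = drop | drop.T
    adj_aug = np.where(drop, 0.0, adj)
    return x_aug, adj_aug
```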
Despite the remarkable progress of image captioning, existing captioners typically lack the controllable capability to generate desired image captions, e.g., describing the image in a rough or detailed manner, in a factual or emotional view, etc. In this paper, we show that a unified model can perform well in diverse domains and freely switch among multiple styles. Such a controllable capability is achieved by embedding prompt learning into the image captioning framework. To be specific, we design a set of prompts to fine-tune the pre-trained image captioner. These prompts allow the model to absorb stylized data from different domains for joint training, without performance degradation in each domain. Furthermore, we optimize the prompts with learnable vectors in the continuous word embedding space, avoiding heuristic prompt engineering while exhibiting superior performance. In the inference stage, our model is able to generate desired stylized captions by choosing the corresponding prompts. Extensive experiments verify the controllable capability of the proposed method. Notably, we achieve outstanding performance on two diverse image captioning benchmarks, including the COCO Karpathy split and TextCaps, using a unified model.
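The core mechanism, optimizing prompts as learnable vectors in the continuous word-embedding space, can be sketched as follows. The module name, prompt length, and dimensions are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class StylePrompts(nn.Module):
    """Learnable continuous prompts, one bank per caption style.

    A hypothetical module illustrating the idea of optimizing prompts in
    embedding space instead of hand-crafting discrete prompt text.
    """
    def __init__(self, num_styles=4, prompt_len=8, embed_dim=512):
        super().__init__()
        self.prompts = nn.Parameter(torch.randn(num_styles, prompt_len, embed_dim) * 0.02)

    def forward(self, token_embeds, style_id):
        # token_embeds: (batch, seq_len, embed_dim) embedded caption tokens
        batch = token_embeds.size(0)
        prompt = self.prompts[style_id].unsqueeze(0).expand(batch, -1, -1)
        # Prepend the style prompt; the captioner consumes the result.
        return torch.cat([prompt, token_embeds], dim=1)
```

At inference time, choosing a different `style_id` selects a different prompt bank and hence a different caption style, without touching the captioner's weights.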
Motivation: Enhancers are important cis-regulatory elements that regulate a wide range of biological functions and enhance the transcription of target genes. Although many state-of-the-art computational methods have been proposed to efficiently identify enhancers, learning globally contextual features remains one of the challenges for computational methods. Given the similarities between biological sequences and natural language sentences, novel BERT-based language techniques have been applied to extract complex contextual features in various computational biology tasks, such as protein function/structure prediction. To speed up research on enhancer identification, it is urgent to construct a BERT-based enhancer language model. Results: In this paper, we propose a multi-scale enhancer identification method (iEnhancer-ELM) based on enhancer language models, which treats enhancer sequences as natural-language sentences composed of k-mer nucleotides. iEnhancer-ELM can extract contextual information of multi-scale k-mers with positions from raw enhancer sequences. Benefiting from the complementary information of multi-scale k-mers, we ensemble four iEnhancer-ELM models to improve enhancer identification. Benchmark comparisons show that our model outperforms state-of-the-art methods. Through the interpretable attention mechanism, we find 30 biological patterns, of which 40% (12/30) are verified by a widely used motif tool (STREME) and a popular dataset (JASPAR), demonstrating that our model has the potential to reveal the biological mechanisms of enhancers. Availability: The source code is available at https://github.com/chen-bioinfo/iEnhancer-ELM Contact: junjiechen@hit.edu.cn and junjie.chen.hit@gmail.com; Supplementary information: Supplementary data are available at Bioinformatics online.
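The k-mer tokenization that turns a DNA sequence into BERT-style "words" is straightforward; a minimal sketch follows. The specific scales shown (k = 3..6) are an assumption for illustration, not necessarily the four scales used by the ensemble.

```python
def kmer_tokenize(sequence, k=3, stride=1):
    """Split a DNA sequence into overlapping k-mer 'words' so that an
    enhancer sequence can be fed to a BERT-style language model.

    >>> kmer_tokenize("ACGTAC", k=3)
    ['ACG', 'CGT', 'GTA', 'TAC']
    """
    sequence = sequence.upper()
    return [sequence[i:i + k] for i in range(0, len(sequence) - k + 1, stride)]

# Multi-scale view: the ensemble combines models built on different k
# (the exact scales used by iEnhancer-ELM are an assumption here).
multi_scale = {k: kmer_tokenize("ACGTACGTAGCT", k=k) for k in (3, 4, 5, 6)}
```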
Image super-resolution is a common task on mobile and IoT devices, where one often needs to upscale and enhance low-resolution images and video frames. While numerous solutions have been proposed for this problem in the past, they are usually not compatible with low-power mobile NPUs having many computational and memory constraints. In this Mobile AI challenge, we address this problem and task the participants with designing an efficient quantized image super-resolution solution that can demonstrate real-time performance on mobile NPUs. The participants were provided with the DIV2K dataset and trained INT8 models to perform high-quality 3X image upscaling. The runtime of all models was evaluated on the Synaptics VS680 Smart Home board with a dedicated edge NPU capable of accelerating quantized neural networks. All proposed solutions are fully compatible with the above NPU, demonstrating rates of up to 60 FPS when reconstructing Full HD images. A detailed description of all models developed in the challenge is provided in this paper.
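For context, a standard TensorFlow Lite full-integer (INT8) post-training quantization recipe of the kind such challenge entries typically rely on is sketched below. The model directory and representative-data loader are placeholders, not challenge-provided code, and individual entries may instead use quantization-aware training.

```python
import tensorflow as tf

def quantize_sr_model(saved_model_dir, representative_images):
    """Convert a float super-resolution model to a fully-integer INT8
    TFLite model suitable for an edge NPU. representative_images is an
    iterable of low-resolution float32 numpy arrays (H, W, 3)."""
    def representative_dataset():
        for img in representative_images:
            # Calibration samples used to estimate activation ranges.
            yield [img[None, ...].astype("float32")]

    converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_dataset
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.int8   # fully integer I/O for the NPU
    converter.inference_output_type = tf.int8
    return converter.convert()                 # serialized INT8 .tflite model
```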
The guzheng is a traditional Chinese instrument with diverse playing techniques. Instrument playing technique (IPT) plays an important role in musical performance. However, most existing IPT detection works show low efficiency on variable-length audio and provide no guarantee of generalization, as they rely on a single sound bank for both training and testing. In this study, we propose an end-to-end guzheng playing technique detection system using a fully convolutional network that can be applied to variable-length audio. Because each guzheng playing technique is applied to a note, a dedicated onset detector is trained to segment the audio into notes, and its predictions are fused with the frame-wise IPT predictions. During fusion, we add up the IPT prediction frames inside each note and take the IPT with the highest probability within each note as the final output of that note. We create a new dataset named GZ_IsoTech from multiple sound banks and real-world recordings for guzheng performance analysis. Our approach achieves 87.97% in frame-level accuracy and 80.76% in note-level F1 score, outperforming existing works and demonstrating the effectiveness of the proposed method for IPT detection.
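The note-level fusion rule described above is simple enough to state directly in code; the array shapes are assumptions, but the rule itself (sum frame predictions within a note, take the argmax) follows the abstract.

```python
import numpy as np

def fuse_note_level(frame_probs, note_boundaries):
    """Fuse frame-wise IPT predictions into note-level decisions.

    frame_probs:     (num_frames, num_ipts) per-frame technique probabilities
    note_boundaries: list of (start_frame, end_frame) pairs from the onset detector

    Sum the frame predictions inside each note and output the technique
    with the highest total probability for that note.
    """
    labels = []
    for start, end in note_boundaries:
        summed = frame_probs[start:end].sum(axis=0)
        labels.append(int(np.argmax(summed)))
    return labels
```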
Social recommendation leverages social relations to enhance representation learning for recommendation. Most social recommendation models unify user representations from user-item interactions (the collaborative domain) and social relations (the social domain). However, such an approach may fail to model users' heterogeneous behavior patterns across the two domains, impairing the expressiveness of user representations. In this work, to address this limitation, we propose a novel Disentangled Contrastive learning framework for social Recommendation, DcRec. More specifically, we propose to learn disentangled user representations from the item and social domains. Moreover, disentangled contrastive learning is designed to perform knowledge transfer between the disentangled user representations for social recommendation. Comprehensive experiments on various real-world datasets demonstrate the superiority of our proposed model.
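One natural way to realize "knowledge transfer by contrast" between the two disentangled views of a user is an InfoNCE-style objective that treats the item-domain and social-domain embeddings of the same user as a positive pair. This is a generic sketch of that idea, not DcRec's exact loss.

```python
import torch
import torch.nn.functional as F

def cross_domain_infonce(z_item, z_social, temperature=0.2):
    """Contrastive objective between disentangled user representations.

    z_item, z_social: (num_users, dim) user embeddings learned separately
    from the item domain and the social domain.
    """
    z1 = F.normalize(z_item, dim=1)
    z2 = F.normalize(z_social, dim=1)
    logits = z1 @ z2.t() / temperature          # (num_users, num_users) similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    # Symmetric loss: each user's two views should match each other only.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```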
Magnetic resonance imaging (MRI) is an important non-invasive clinical tool that can produce high-resolution and reproducible images. However, high-quality MR images require long scan times, which leads to exhaustion and discomfort of patients, inducing more artefacts from the patients' voluntary movements and involuntary physiological movements. To accelerate the scanning process, methods based on k-space undersampling and deep-learning-based reconstruction have been popularised. This work introduces SwinMR, a novel Swin-transformer-based method for fast MRI reconstruction. The whole network consists of an input module (IM), a feature extraction module (FEM), and an output module (OM). The IM and OM are 2D convolutional layers, and the FEM is composed of a cascade of residual Swin transformer blocks (RSTBs) and 2D convolutional layers. An RSTB consists of a series of Swin transformer layers (STLs). The shifted-window multi-head self-attention (W-MSA/SW-MSA) of an STL is performed in shifted windows, rather than over the whole image space as in the multi-head self-attention (MSA) of the original transformer. A novel multi-channel loss using the sensitivity maps is proposed, which is shown to preserve more texture and details. We performed a series of comparative and ablation studies on the Calgary-Campinas public brain MR dataset, and conducted a downstream segmentation experiment on the Multi-modal Brain Tumour Segmentation Challenge 2017 dataset. The results show that our SwinMR achieves high-quality reconstruction compared with other benchmark methods, and that it is robust to different undersampling masks, under noise interruption, and across different datasets. The code is publicly available at https://github.com/ayanglab/swinmr.
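To illustrate how sensitivity maps can enter a reconstruction loss, the sketch below projects both the reconstruction and the reference onto each coil before comparing them. This is a simplified L1 form of the idea; SwinMR's actual multi-channel loss may weight or combine the terms differently.

```python
import torch

def multichannel_l1_loss(recon, reference, sensitivity_maps):
    """Sensitivity-map-weighted multi-channel loss (a simplified sketch).

    recon, reference:  (batch, H, W) complex-valued images
    sensitivity_maps:  (batch, num_coils, H, W) complex coil sensitivities

    Projecting both images onto each coil via the sensitivity maps before
    comparing them penalises per-coil discrepancies, which tends to
    preserve coil-localised texture and detail.
    """
    coil_recon = sensitivity_maps * recon.unsqueeze(1)   # expand to coils
    coil_ref = sensitivity_maps * reference.unsqueeze(1)
    return (coil_recon - coil_ref).abs().mean()
```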
Human grasp synthesis has numerous applications, including AR/VR, video games, and robotics. While some methods have been proposed to generate realistic hand-object interactions for object grasping and manipulation, they typically consider only the hand interacting with the object. In this work, our goal is to synthesise whole-body grasping motions: given a 3D object, we aim to generate diverse and natural whole-body human motions that approach and grasp the object. This task is challenging, as it requires modelling both whole-body dynamics and dexterous finger movements. To this end, we propose SAGA (stochastic whole-body grasping with contact), which consists of two key components: (a) static whole-body grasping pose generation: specifically, we propose a multi-task generative model that jointly learns static whole-body grasping poses and human-object contacts; and (b) grasping motion infilling: given an initial pose and the generated whole-body grasping pose as the starting and ending poses of a motion, we design a novel contact-aware generative motion infilling module to produce diverse grasp-oriented motions. We demonstrate that our method is the first framework to generate realistic and expressive whole-body motions that approach and grasp randomly placed, unseen objects. The code and videos are available at: https://jiahaoplus.github.io/saga/saga.html.
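A minimal sketch of what the multi-task generative model in stage (a) might look like is a conditional VAE with two decoder heads, one for the static pose and one for per-marker contact probabilities. All dimensions, names, and architectural choices below are illustrative assumptions, not SAGA's model.

```python
import torch
import torch.nn as nn

class GraspPoseCVAE(nn.Module):
    """Toy conditional VAE: conditioned on an object code, jointly decode a
    static whole-body pose and per-marker contact probabilities."""
    def __init__(self, pose_dim=165, contact_dim=99, obj_dim=128, z_dim=32):
        super().__init__()
        self.z_dim = z_dim
        self.encode = nn.Sequential(
            nn.Linear(pose_dim + contact_dim + obj_dim, 256), nn.ReLU(),
            nn.Linear(256, 2 * z_dim))                   # -> (mu, logvar)
        self.decode = nn.Sequential(
            nn.Linear(z_dim + obj_dim, 256), nn.ReLU())
        self.pose_head = nn.Linear(256, pose_dim)        # task 1: body pose
        self.contact_head = nn.Linear(256, contact_dim)  # task 2: contact logits

    def forward(self, pose, contact, obj_code):
        mu, logvar = self.encode(torch.cat([pose, contact, obj_code], -1)).chunk(2, -1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterize
        h = self.decode(torch.cat([z, obj_code], -1))
        return self.pose_head(h), torch.sigmoid(self.contact_head(h)), mu, logvar

    def sample(self, obj_code):
        # Stochastic generation: different z samples yield diverse grasps.
        z = torch.randn(obj_code.size(0), self.z_dim, device=obj_code.device)
        h = self.decode(torch.cat([z, obj_code], -1))
        return self.pose_head(h), torch.sigmoid(self.contact_head(h))
```

Stage (b) would then take the sampled pose and contacts as the motion endpoint and conditioning signal for the infilling module.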
Federated learning (FL), an emerging distributed machine learning paradigm in conflux with edge computing, is a promising area with novel applications on mobile edge devices. In FL, the training data remain private: mobile devices collaboratively train a model based on their own data under the coordination of a central server by sharing only model updates. However, without the centralised availability of data, the computing nodes need to communicate model updates frequently to attain convergence. Hence, the local computation time for creating local model updates, together with the time for transmitting them to and from the server, contributes a delay to the overall training time. Furthermore, unreliable network connections can obstruct efficient communication of these updates. To address these issues, we propose a delay-efficient FL mechanism that reduces the overall time (including both computation and communication latencies) and the number of communication rounds required for model convergence. Exploring the impact of various parameters on the delay, we seek to balance the trade-off between wireless communication (to talk) and local computation (to work). We formulate the relation with the overall time as an optimisation problem and demonstrate the efficacy of our approach through extensive simulations.
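The talk/work trade-off can be made concrete with a toy delay model: each round costs local computation plus a two-way update transfer, and doing more local work per round reduces the number of rounds up to the point where client drift sets in. All constants and the convergence model below are illustrative assumptions, not the paper's formulation.

```python
def rounds_to_converge(local_epochs):
    # Crude convergence model: more local work means fewer rounds, but
    # too much local work causes client drift and adds rounds back.
    return 100 // local_epochs + local_epochs

def total_training_time(local_epochs, t_epoch=0.5, t_comm=3.0):
    """Toy delay model: each round costs local computation ('work')
    plus the two-way model-update transfer ('talk'), in seconds."""
    per_round = local_epochs * t_epoch + t_comm
    return rounds_to_converge(local_epochs) * per_round

best = min((1, 2, 5, 10, 20), key=total_training_time)
print(best, total_training_time(best))   # here an intermediate setting (5) wins
```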
In recent years, the research focus of scene text detection and recognition has shifted to arbitrary-shaped text, for which text shape representation is a fundamental problem. In our view, an ideal representation should be compact, complete, efficient, and reusable for subsequent recognition. However, previous representations are deficient in one or more of these aspects. The thin-plate spline (TPS) transformation has achieved great success in scene text recognition. Inspired by this, we reverse its usage and take the TPS as an exquisite representation for arbitrary-shaped text. The TPS representation is compact, complete, and efficient: with the predicted TPS parameters, the detected text region can be directly rectified to a near-horizontal one to assist subsequent recognition. To further exploit the potential of the TPS representation, a boundary alignment loss is proposed. Based on these designs, we implement the text detector TPSNet, which can be conveniently extended to a text spotter. Extensive evaluation and ablation on several public benchmarks demonstrate the effectiveness and superiority of the proposed text representation and spotting method. In particular, TPSNet achieves a detection F-measure improvement of 4.4% (78.4% vs. 74.0%) on the ArT dataset and an end-to-end spotting F-measure improvement of 5.0% (78.5% vs. 73.5%) on Total-Text, which is a large margin without bells and whistles.
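To make the "TPS as text-shape representation" idea concrete, here is a generic 2D thin-plate spline solver and transform: with fiducial points on a curved text boundary as sources and points on a horizontal rectangle as targets, the fitted parameters rectify the region. Note that TPSNet *predicts* such parameters directly rather than fitting them, so this is background machinery, not the paper's method.

```python
import numpy as np

def solve_tps(source_pts, target_pts, reg=1e-6):
    """Fit thin-plate-spline parameters mapping source -> target points.

    source_pts, target_pts: (n, 2) arrays of corresponding 2D points.
    Returns an (n+3, 2) parameter matrix: n kernel weights plus an
    affine part, per output coordinate.
    """
    n = len(source_pts)
    d2 = ((source_pts[:, None, :] - source_pts[None, :, :]) ** 2).sum(-1)
    K = 0.5 * d2 * np.log(d2 + 1e-12) + reg * np.eye(n)   # U(r) = r^2 log r
    P = np.hstack([np.ones((n, 1)), source_pts])          # affine terms
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    b = np.vstack([target_pts, np.zeros((3, 2))])
    return np.linalg.solve(A, b)

def tps_transform(points, source_pts, params):
    """Apply fitted TPS parameters to an (m, 2) array of points."""
    d2 = ((points[:, None, :] - source_pts[None, :, :]) ** 2).sum(-1)
    U = 0.5 * d2 * np.log(d2 + 1e-12)
    P = np.hstack([np.ones((len(points), 1)), points])
    return U @ params[:-3] + P @ params[-3:]
```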